The notion of a boosting algorithm was originally introduced by Valiant in the context of the “probably approximately correct” (PAC) model of learnability [19]. In this context, boosting is a method for provably improving the accuracy of any “weak” classification learning algorithm. The first boosting algorithm was invented by Schapire [16] and the second one by Freund [2]. These two algorithms were introduced for a specific theoretical purpose. However, since the introduction of AdaBoost [5], quite a number of perspectives on boosting have emerged. For instance, AdaBoost can be understood as a method for maximizing the “margins” or “confidences” of the training examples [17]; as a technique for playing repeated matrix games [4, 6]; as a ...